Tensor-based Sequential Learning via Hankel Matrix Representation for Next Item Recommendations. (arXiv:2212.05720v1 [cs.LG])
Self-attentive transformer models have recently been shown to solve the next
item recommendation task very effectively. The learned attention weights
capture sequential dynamics in user behavior and generalize well. Motivated by
the special structure of the learned parameter space, we ask whether it can be
mimicked with an alternative, more lightweight approach. We develop a new
tensor factorization-based model that ingrains structural knowledge about
sequential data into the learning process. We demonstrate how certain
properties of a self-attention network can be reproduced with our approach,
which is based on a special Hankel matrix representation. The resulting model
has a shallow linear architecture and performs competitively with its neural
counterpart.
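To make the Hankel construction concrete, here is a minimal sketch (not the paper's implementation) of how a user's item sequence can be arranged into a Hankel matrix; the toy sequence and the window length `w` are illustrative assumptions. A Hankel matrix has constant anti-diagonals, so stacking sliding windows of the sequence this way encodes its sequential, shift-invariant structure.

```python
import numpy as np
from scipy.linalg import hankel

seq = np.array([3, 1, 4, 1, 5, 9, 2, 6])  # one user's item-ID history (toy data)
w = 4                                     # sliding-window length (assumed)

# Rows are successive length-(len(seq) - w + 1) windows of the sequence;
# entry (i, j) equals seq[i + j], which is the defining Hankel property.
H = hankel(seq[:w], seq[w - 1:])
print(H)
# [[3 1 4 1 5]
#  [1 4 1 5 9]
#  [4 1 5 9 2]
#  [1 5 9 2 6]]
```

A factorization of such a matrix (or of a tensor built from many users' Hankel matrices) exposes the repeated transition patterns shared across windows, which is the structural knowledge the abstract says the model ingrains.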